Once you have a map, it’s useful to know how much you should trust it: any titer data could be run through the software and produce a map, but the result wouldn’t necessarily be useful or at all representative of the patterns of reactivity in the data.

There are several approaches that can be taken to try and determine the utility of an antigenic map and to identify any potential problems or problematic data.
Ultimately the aim is to include a function that produces a diagnostic web page where all these features can be analysed at once, alongside others, but for now I’ll concentrate on the subset that is currently available.
First we’ll get an example map, using a subset of the data from the H3N2 map published in 2004.
```r
library(Racmacs)

# Read in the 2004 H3 map
map <- read.acmap(system.file("extdata/h3map2004.ace", package = "Racmacs"))

# Scale down the point size a little
agSize(map) <- 2
srSize(map) <- 2

# View the map
view(map)
```
Comparing map distances against the distances implied by the titer table is a good first check of map accuracy: ideally you want the detectable values to follow the x = y dashed line, with no systematic biases.
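If you want the raw numbers behind this comparison, a minimal sketch is below. It assumes `tableDistances()` and `mapDistances()` return matching matrices of target and fitted distances, and that thresholded entries may come through as non-numeric and so are coerced with `as.numeric()` (dropping them as `NA`); check the function documentation in your installed version before relying on this.

```r
# Extract the distances implied by the titer table and those realised in the map
table_dists <- suppressWarnings(as.numeric(tableDistances(map)))
map_dists   <- as.numeric(mapDistances(map))

# Plot detectable table distances against map distances,
# with the x = y line that an ideal fit would follow
plot(
  table_dists, map_dists,
  xlab = "Table distance", ylab = "Map distance",
  pch = 16, col = adjustcolor("black", alpha.f = 0.3)
)
abline(0, 1, lty = 2)
```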
```r
# Simply call the function plotly_map_table_distance()
plotly_map_table_distance(map)
```
A good way to see how confidently points are positioned in a map is to perturb the underlying titers with some noise, remake the map, and see how similar the resulting positions are, repeating this process multiple times. This is the idea of a noisy bootstrap, implemented in the bootstrapMap() function.
The function takes a number of arguments, namely:

- `map`: The antigenic map object to bootstrap.
- `bootstrap_repeats`: The number of bootstrap repeats to perform.
- `optimizations_per_repeat`: How many times the map should be reoptimized from scratch when searching for the best map for each noisy bootstrap repeat.
- `ag_noise_sd`: The standard deviation of the noise applied randomly on a per-antigen basis; this type of per-antigen bias is often seen in titrations, perhaps due to variation in the HAUs used, for example.
- `titer_noise_sd`: The standard deviation of the noise applied randomly to every titer individually; this is the normal type of noise that people tend to consider.

As you can imagine, a noisy bootstrap is computationally intensive, since bootstrap_repeats * optimizations_per_repeat fresh optimizations have to be performed, alongside all the other processing.
In the example below I’ve used 100 bootstrap repeats and 10 optimizations per repeat to keep things running a bit quicker, but it wouldn’t be unreasonable to do 1000 bootstrap repeats and 500 optimizations per repeat, depending on the complexity of the map. Typically you would run this just once and then save the resulting map, along with its bootstraps, with the save.acmap() function.
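Since the bootstrap data travels with the map object, the save-and-reload round trip can be sketched as below; the filename is just an illustration.

```r
# Save the map, including any bootstrap data, to an .ace file
save.acmap(map, "h3map2004_bootstrapped.ace")

# Later, read it back in with the bootstrap repeats still attached
map <- read.acmap("h3map2004_bootstrapped.ace")
```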
```r
# First of all run a series of bootstrap repeats on the map
map <- bootstrapMap(
  map                      = map,
  bootstrap_repeats        = 100,
  optimizations_per_repeat = 10,
  ag_noise_sd              = 0.7,
  titer_noise_sd           = 0.7
)

# Data on these bootstrap repeats is accessible with
# additional functions once it has been run on the map
bootstrap_ag_coords_list <- mapBootstrap_agCoords(map)
bootstrap_sr_coords_list <- mapBootstrap_srCoords(map)
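To turn the bootstrap output into a rough per-antigen uncertainty summary, you can measure how far each antigen strays from its mean bootstrap position. This sketch assumes mapBootstrap_agCoords() returns a list with one antigen coordinate matrix per repeat (rows = antigens, columns = dimensions); verify the returned structure in your version before using it.

```r
# Hypothetical summary: mean distance of each antigen from its
# average bootstrap position, across all repeats
bootstrap_ag_coords_list <- mapBootstrap_agCoords(map)

# Stack the per-repeat matrices into an antigens x dimensions x repeats array
ag_coords_array <- simplify2array(bootstrap_ag_coords_list)

ag_spread <- apply(ag_coords_array, 1, function(coords) {
  # coords is a dimensions x repeats matrix for one antigen
  mean(sqrt(colSums((coords - rowMeans(coords))^2)))
})

# Antigens with the largest spread are the least confidently positioned
head(sort(ag_spread, decreasing = TRUE))
```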
If you view a map that has been bootstrapped, any point you click on will show how its position varies across the bootstrap repeats performed.
```r
view(map, select_ags = 4)
```